Constraint Approach to Multi-Objective Optimization

Authors

  • Martine Ceberio
  • Olga Kosheleva
  • Vladik Kreinovich
Abstract

In many practical situations, we would like to maximize (or minimize) several different criteria, and it is not clear how much weight to assign to each of these criteria. Such situations are ubiquitous, and it is therefore important to be able to solve the corresponding multi-objective optimization problems. There exist many heuristic methods for solving such problems. In this paper, we reformulate multi-objective optimization as a constraint satisfaction problem, and we show that this reformulation explains two widely used multi-objective optimization techniques: optimizing a weighted sum of the objective functions and optimizing the product of the normalized values of these functions.

1 Formulation of the Problem

Multi-objective optimization: examples. In many practical situations, we would like to maximize several different criteria. For example, in meteorology and environmental research, it is important to measure fluxes of heat, water, carbon dioxide, methane, and other trace gases that are exchanged within the atmospheric boundary layer. To perform these measurements, researchers build vertical towers equipped with sensors at different heights; these towers are called Eddy flux towers. When selecting a location for an Eddy flux tower, we have several criteria to satisfy; see, e.g., [1, 5]:

  • The station should be located as far away from roads as possible, so that the gas fluxes generated by cars do not influence our measurements of atmospheric fluxes.
  • On the other hand, the station should be located as close to the road as possible, so as to minimize the cost of carrying the heavy parts when building the station.
  • The inclination at the station location should be small, because otherwise the flux will be mostly determined by this inclination and will not be reflective of the atmospheric processes, etc.

In geophysics, different types of data provide complementary information about the Earth's structure.
For example, information from body waves (P-wave receiver functions) mostly covers deep areas, while information about the Earth's surface is mostly contained in surface waves. To get a good understanding of the Earth's structure, it is therefore important to take into account data of different types; see, e.g., [3, 9]. If we had only one type of data, then we could use the usual Least Squares approach $f_i(x) \to \min$ to find a model that best fits the data. If we knew the relative accuracy of the different data types, we could apply the Least Squares approach to all the data. In practice, however, we do not have good information about the relative accuracy of the different data types. In this situation, all we can say is that we want to minimize the errors $f_i(x)$ corresponding to all the observations $i$.

Multi-objective optimization is difficult. The difficulty with this problem is that, in contrast to simple optimization, the problem of multi-objective optimization is not precisely defined. Indeed, if we want to minimize a single objective $f(x) \to \min$, this has a very precise meaning: we want to find an alternative $x_0$ for which $f(x_0) \le f(x)$ for all other alternatives $x$. Similarly, if we want to maximize a single objective $f(x) \to \max$, this has a very precise meaning: we want to find an alternative $x_0$ for which $f(x_0) \ge f(x)$ for all other alternatives $x$. In contrast, for a multi-objective optimization problem

$$f_1(x) \to \min, \; f_2(x) \to \min, \; \ldots, \; f_n(x) \to \min \quad (1)$$

or

$$f_1(x) \to \max, \; f_2(x) \to \max, \; \ldots, \; f_n(x) \to \max, \quad (2)$$

no such precise meaning is known. Let us illustrate this ambiguity on a simple example: selecting a trip. In many cases, convenient direct flights which save on travel time are more expensive, while a cheaper trip may involve a long stay-over between flights. So, if we find a trip that minimizes cost, the trip takes longer. Vice versa, if we minimize the travel time, the trip costs more.
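This trip trade-off can be made concrete with a small dominance check. The sketch below is illustrative only: the two trips, their numbers, and the `dominates` helper are made-up assumptions, not from the paper; it shows that neither trip is better on both criteria, so "minimize both" does not single out an answer.

```python
# Hypothetical trips, each described by (cost in dollars, travel time in hours);
# both criteria are to be minimized.
trip_a = (300.0, 14.0)  # cheaper, but with a long stay-over
trip_b = (550.0, 6.0)   # convenient direct flight, but more expensive

def dominates(x, y):
    """True if x is at least as good as y on every criterion
    (both minimized) and strictly better on at least one."""
    return all(xi <= yi for xi, yi in zip(x, y)) and \
           any(xi < yi for xi, yi in zip(x, y))

# Neither trip dominates the other: the problem has no unambiguous optimum.
print(dominates(trip_a, trip_b))  # False
print(dominates(trip_b, trip_a))  # False
```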
It is therefore necessary to come up with a way to find an appropriate compromise between the several objectives.

2 Analysis of the Problem and Two Main Ideas

Analysis of the problem. Without loss of generality, let us consider a multi-objective maximization problem. In this problem, ideally, we would like to find an alternative $x_0$ that satisfies the constraints $f_i(x_0) \ge f_i(x)$ for all objectives $i$ and for all alternatives $x$. In other words, in the ideal case, if we select an alternative $x$ at random, then with probability 1 we satisfy the above constraint.

Main ideas. The problem is that we cannot satisfy all these constraints with probability 1. A natural idea is thus to find $x_0$ for which the probability of satisfying these constraints is as high as possible. Let us describe two approaches to formulating this idea (i.e., the corresponding probability) in precise terms.

First approach: the probability of satisfying all $n$ constraints. The first approach is to look for the probability that for a randomly selected alternative $x$, we have $f_i(x_0) \ge f_i(x)$ for all $i$.

Second approach: the probability of satisfying a randomly selected constraint. An alternative approach is to look for the probability that for a randomly selected alternative $x$ and a randomly selected objective $i$, we have $f_i(x_0) \ge f_i(x)$.

How to formulate these two ideas in precise terms. To formulate the above two ideas in precise terms, we need to estimate two probabilities:

  • the probability $p_I(x_0)$ that for a randomly selected $x$, we have $f_i(x_0) \ge f_i(x)$ for all $i$, and
  • the probability $p_{II}(x_0)$ that for a randomly selected $x$ and a randomly selected $i$, we have $f_i(x_0) \ge f_i(x)$.

Let us estimate the first probability. Since we do not have any prior information about the dependence between the different objective functions $f_i(x)$ and $f_j(x)$, $i \ne j$, it is reasonable to assume that the events $f_i(x_0) \ge f_i(x)$ and $f_j(x_0) \ge f_j(x)$ are independent for different $i$ and $j$.
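The two probabilities $p_I(x_0)$ and $p_{II}(x_0)$ can also be estimated empirically, before any closed-form analysis. A minimal Monte Carlo sketch, assuming for illustration that alternatives are numbers in $[0, 1]$ sampled uniformly and using two made-up objective functions (neither assumption is from the paper):

```python
import random

# Hypothetical objectives to be maximized over alternatives x in [0, 1].
objectives = [lambda x: x, lambda x: 1.0 - x * x]

def estimate_probs(x0, n_samples=10_000, seed=0):
    """Monte Carlo estimates of:
    p_I(x0):  probability that x0 beats a random x on ALL objectives;
    p_II(x0): probability that x0 beats a random x on a RANDOMLY chosen objective."""
    rng = random.Random(seed)
    all_count = 0
    single_count = 0
    for _ in range(n_samples):
        x = rng.random()  # randomly selected alternative
        wins = [f(x0) >= f(x) for f in objectives]
        if all(wins):              # first approach: all n constraints hold
            all_count += 1
        if rng.choice(wins):       # second approach: one random constraint holds
            single_count += 1
    return all_count / n_samples, single_count / n_samples

p1, p2 = estimate_probs(0.5)
print(p1, p2)
```

Since satisfying all constraints implies satisfying any randomly chosen one, the first estimate can never exceed the second.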
Thus, the desired probability $p_I(x_0)$ that all $n$ such inequalities are satisfied can be estimated as the product

$$p_I(x_0) = \prod_{i=1}^{n} p_i(x_0)$$

of the $n$ probabilities $p_i(x_0)$ of satisfying the corresponding inequalities. So, to estimate $p_I(x_0)$, it is sufficient to estimate, for every $i$, the probability $p_i(x_0)$ that $f_i(x_0) \ge f_i(x)$ for a randomly selected alternative $x$.

How can we estimate this probability $p_i(x_0)$? Again, in general, we do not have much prior knowledge of the $i$-th objective function $f_i(x)$. What do we know? Before starting to solve this problem as a multi-objective optimization problem, we probably tried to simply optimize each of the objective functions separately, hoping that the corresponding solution would also optimize all the other objective functions. Since we are interested in maximizing, this means that we know the largest possible value $M_i$ of each of the objective functions:

$$M_i = \max_x f_i(x).$$

In many practical cases, the optimum can be attained by differentiating the objective function and equating all its derivatives to 0. This is, for example, how the Least Squares method works: to optimize the quadratic function that describes how well the model fits the data, we solve the system of linear equations obtained by equating all partial derivatives to 0. It is important to mention that when we consider the points where all the partial derivatives are equal to 0, we find not only maxima but also minima of the objective function. Thus, it is reasonable to assume that in the process of maximizing each objective function $f_i(x)$, in addition to this function's maximum, we also compute its minimum

$$m_i = \min_x f_i(x).$$

Since we know the smallest possible value $m_i$ of the objective function $f_i(x)$, and we know its largest possible value $M_i$, we know that the value $f_i(x)$ corresponding to a randomly selected alternative $x$ must lie inside the interval $[m_i, M_i]$. In effect, this is all the information that we have: the random value $f_i(x)$ is somewhere in the interval $[m_i, M_i]$.
Since we do not have any reason to believe that some values from this interval are more probable than others, it is reasonable to assume that all the values from this interval are equally probable, i.e., that we have a uniform distribution on the interval $[m_i, M_i]$. This argument – known as the Laplace Indeterminacy Principle – can be formalized as selecting the distribution with the probability density $\rho(x)$ for which the entropy

$$S = -\int \rho(x) \cdot \ln(\rho(x)) \, dx$$

is the largest possible. One can check that among the distributions on a given interval, the uniform distribution is the one with the largest entropy [6].

For the uniform distribution of the values $f_i(x) \in [m_i, M_i]$, the probability $p_i(x_0)$ that the random value $f_i(x)$ does not exceed $f_i(x_0)$, i.e., belongs to the subinterval $[m_i, f_i(x_0)]$, is equal to the ratio of the lengths of the corresponding intervals, i.e., to

$$p_i(x_0) = \frac{f_i(x_0) - m_i}{M_i - m_i}.$$

Thus, the desired probability $p_I(x_0)$ is equal to the product of such probabilities. So, we arrive at the following precise formulation of the first idea.

Precise formulation of the first idea. To solve a multi-objective optimization problem (2), we find a value $x_0$ for which the product

$$p_I(x_0) = \prod_{i=1}^{n} \frac{f_i(x_0) - m_i}{M_i - m_i}$$

attains the largest possible value, where $m_i \stackrel{\text{def}}{=} \min_x f_i(x)$ and $M_i \stackrel{\text{def}}{=} \max_x f_i(x)$.

Let us estimate the second probability. In the second approach, we select the objective function $f_i$ at random. Since we have no reason to prefer any one of the $n$ objective functions, it makes sense to select each of these $n$ functions with equal probability $\frac{1}{n}$. For each selection of the objective function $i$, we know the probability

$$p_i(x_0) = \frac{f_i(x_0) - m_i}{M_i - m_i}$$

that we will have $f_i(x_0) \ge f_i(x)$ for a randomly selected alternative $x$. The probability of selecting each objective function $f_i(x)$ is equal to $\frac{1}{n}$. Thus, we can use the complete probability formula to compute the desired probability $p_{II}(x_0)$.

Precise formulation of the second idea.
To solve a multi-objective optimization problem (2), we find a value $x_0$ for which the expression

$$p_{II}(x_0) = \sum_{i=1}^{n} \frac{1}{n} \cdot \frac{f_i(x_0) - m_i}{M_i - m_i}$$

attains the largest possible value.

Discussion. Let us show that both ideas lead to known (and widely used) methods for solving multi-objective optimization problems.

The second idea leads to optimizing a linear combination of the objective functions. Let us start with analyzing the second idea, since the resulting sum-based formula looks somewhat simpler than the product-based formula corresponding to the first idea. In the case of the second idea, the optimized value $p_{II}(x_0)$ is a linear combination of the $n$ objective functions – to be more precise, it is the arithmetic average of the objective functions normalized in such a way that their values lie within the interval $[0, 1]$:

$$p_{II}(x_0) = \frac{1}{n} \sum_{i=1}^{n} \frac{f_i(x_0) - m_i}{M_i - m_i}.$$
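Both precise formulations are easy to prototype. The sketch below is illustrative only: the finite list of candidate alternatives and the two objective functions are made-up assumptions, not from the paper. It computes the normalization constants $m_i$ and $M_i$, then maximizes the product $p_I$ and the average $p_{II}$ over the candidates.

```python
import math

# Hypothetical setup: a finite list of alternatives and two objectives to maximize.
alternatives = [0.0, 0.25, 0.5, 0.75, 1.0]
objectives = [lambda x: x, lambda x: 1.0 - (x - 0.25) ** 2]

# Normalization constants: m_i = min over x of f_i(x), M_i = max over x of f_i(x).
m = [min(f(x) for x in alternatives) for f in objectives]
M = [max(f(x) for x in alternatives) for f in objectives]

def normalized(x):
    """Normalized objective values (f_i(x) - m_i) / (M_i - m_i), each in [0, 1]."""
    return [(f(x) - mi) / (Mi - mi) for f, mi, Mi in zip(objectives, m, M)]

def p_I(x):
    """First idea: product of the normalized objectives."""
    return math.prod(normalized(x))

def p_II(x):
    """Second idea: arithmetic average of the normalized objectives."""
    vals = normalized(x)
    return sum(vals) / len(vals)

best_I = max(alternatives, key=p_I)
best_II = max(alternatives, key=p_II)
print(best_I, best_II)  # 0.5 0.5
```

On this toy data both criteria happen to select the same alternative; in general, the product and the average can prefer different compromises between the objectives.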


Similar articles

A Chance Constraint Approach to Multi Response Optimization Based on a Network Data Envelopment Analysis

In this paper, a novel approach for multi response optimization is presented. In the proposed approach, response variables in treatments combination occur with a certain probability. Moreover, we assume that each treatment has a network style. Because of the probabilistic nature of treatment combination, the proposed approach can compute the efficiency of each treatment under the desirable reli...


Multi-objective scheduling and assembly line balancing with resource constraint and cost uncertainty: A “box” set robust optimization

Assembly lines are flow-oriented production systems that are of great importance in the industrial production of standard, high-volume products and even more recently, they have become commonplace in producing low-volume custom products. The main goal of designers of these lines is to increase the efficiency of the system and therefore, the assembly line balancing to achieve an optimal system i...


FGP approach to multi objective quadratic fractional programming problem

Multi objective quadratic fractional programming (MOQFP) problems involve the optimization of several objective functions, each in the form of a ratio of numerator and denominator functions containing both linear and quadratic terms, under the assumption that the set of feasible solutions is a convex polyhedron with a finite number of extreme points and the denominator part of each of the objecti...


Multi-Objective Stochastic Programming in Microgrids Considering Environmental Emissions

This paper deals with day-ahead programming under uncertainties in microgrids (MGs). A two-stage stochastic programming with the fixed recourse approach was adopted. The studied MG was considered in the grid-connected mode with the capability of power exchange with the upstream network. Uncertain electricity market prices, unpredictable load demand, and uncertain wind and solar power values, du...


Solving a New Multi-objective Inventory-Routing Problem by a Non-dominated Sorting Genetic Algorithm

This paper considers a multi-period, multi-product inventory-routing problem in a two-level supply chain consisting of a distributor and a set of customers. This problem is modeled with the aim of minimizing bi-objectives, namely the total system cost (including startup, distribution and maintenance costs) and risk-based transportation. Products are delivered to customers by some heterogeneous ...


A Two-Phase Simulation-Based Optimization of Hauling System in Open-Pit Mine

One of the key issues in mining is the hauling system. Truck and shovels are the most widely used transportation equipment in mines. In this paper, a two-phase simulation-based optimization is presented to maximize utilization of hauling system in the largest Iranian open-pit copper mine. In the first phase, The OptQuest for Arena software package was used to solve the optimization problem to p...




Publication date: 2015